Method for operating a display device, head-mounted display system and computer storage medium
Abstract:
METHOD FOR OPERATING A DISPLAY DEVICE, HEAD-MOUNTED DISPLAY SYSTEM AND COMPUTER STORAGE MEDIUM. The present invention relates to the presentation of images of shadows cast on a real-world background by images of objects displayed on a see-through display (403). For example, one described embodiment provides a method of operating a display device having a see-through display (403). The method comprises displaying (302) an image of an object (200) on the see-through display (403) and, while displaying the image of the object (200), displaying an image of a shadow (202) cast by the object (200) onto the background scene (104) by acquiring (304) an image of the background scene (104), determining (306) a location of the shadow in the background scene image, producing an enhanced image of the background scene (104) by increasing (316) a relative brightness in a region adjacent to the shadow compared to a brightness within the shadow, and displaying (318) the enhanced image of the background scene (104).

Publication number: BR112014014198B1
Application number: R112014014198-3
Filing date: 2012-12-12
Publication date: 2021-02-17
Inventor: Mathew Lamb
Applicant: Microsoft Technology Licensing, LLC
IPC main class:
Description:
[0001] Various technologies can allow a user to experience a mixture of the real and virtual worlds. For example, some display devices, such as various head-mounted display devices, may comprise see-through displays that allow a displayed image to be superimposed over a real-world background. In this way, displayed images may appear intermixed with elements of the real-world background.

Summary

[0002] Various embodiments are described that relate to the presentation of images of virtual shadows cast on a real-world background by virtual objects displayed on a see-through display system. For example, one described embodiment provides a method of operating a display device having a see-through display. The method comprises displaying an image of an object on the see-through display and, while displaying the image of the object, displaying an image of a virtual shadow cast by the object on the background. The image of the virtual shadow is displayed by acquiring an image of a background scene, determining the location of the virtual shadow in the background scene image, producing an enhanced image of the background scene by increasing a relative brightness in a region adjacent to the virtual shadow compared to a brightness within the shadow, and displaying the enhanced image of the background scene.

[0003] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted elsewhere in this description.

Brief Description of the Drawings

[0004] Figure 1 shows an embodiment of a see-through display system worn by a user in an example use environment.

[0005] Figure 2 shows the perspective of the user of Figure 1 viewing embodiments of an object and an image of a virtual shadow cast by the object onto a background scene in the use environment of Figure 1.

[0006] Figure 3 shows a flow diagram illustrating an embodiment of a method of operating a see-through display system.

[0007] Figure 4 shows a block diagram of an embodiment of a see-through display system.

[0008] Figure 5 shows a perspective view of the embodiment of Figure 1.

Detailed Description

[0009] As mentioned above, see-through display devices may allow a displayed image to be presented over a real-world background, such that objects in the displayed images appear to mix with the real-world background. However, various aspects of such image presentation may make the displayed objects appear less natural. For example, while real-world objects cast shadows, corresponding shadows may be noticeably absent from objects displayed on the display device.

[0010] The generation of shadow images for virtual objects on a real-world background can pose several challenges. For example, where images are formed on the see-through display by projection or by an embedded emissive display technology (for example, a transparent organic light-emitting device display), the displayed images add light to the scene as seen by the user. A shadow, in contrast, is created by subtracting light from a scene. Because these display technologies do not allow light to be subtracted from the real-world background, displaying a shadow cast by a virtual object onto the real-world background can pose difficulties.
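The constraint described in the preceding paragraph can be summarized compactly. The following relation and its symbols are added here purely as an illustration and are not part of the original disclosure:

$$ L_{\text{eye}}(x) = T \cdot L_{\text{world}}(x) + L_{\text{display}}(x), \qquad L_{\text{display}}(x) \ge 0 $$

where $T$ is the transmittance of the see-through optics (reduced, for example, by a neutral density filter), $L_{\text{world}}$ is the luminance of the real-world background and $L_{\text{display}}$ is the light added by the display. Because $L_{\text{display}}$ can never be negative, a shadow cannot be rendered by removing light; it can only be suggested by adding light everywhere except where the shadow should appear, which is the approach described next.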
[0011] Therefore, the disclosed embodiments are directed to forming a virtual shadow on a background scene viewed through a see-through display device by increasing the relative brightness of the non-shadow regions of the background scene as seen by a user. Briefly, an image of the background scene is acquired, potentially reduced in brightness globally, and then configured such that non-shadow regions of the background scene image have a higher relative brightness than the virtual shadow regions, thereby forming an enhanced image of the background scene. The enhanced image of the background scene is then displayed to the user over the real-world background, along with the image of the virtual object from which the virtual shadow appears to originate. Displaying the enhanced image of the background scene over the real-world background effectively adds light to the desired non-shadow portions of the real-world background as perceived by the viewer. This can create the effect of a shadow cast onto real-world background objects by the displayed virtual object.

[0012] In some embodiments, the increase in brightness of non-shadow regions of the enhanced background scene image relative to the virtual shadow regions may be global in extent, such that all non-shadow regions are brightened relative to all virtual shadow regions. In other embodiments, the increase in brightness may be local to areas adjacent to each shadow. Additionally, to mitigate the effect of the light added by the enhanced background image, in some embodiments the see-through display may comprise optical elements, such as a neutral density filter, to reduce the amount of background light that passes through the see-through display.

[0013] Figure 1 illustrates a use environment for an embodiment of a see-through display device 100 in the form of a head-mounted display system. A user 102 wearing the see-through display device 100 is located in an environment and is shown facing a wall 104. For clarity, no physical objects are shown on the floor 106 or the wall 104 of the environment within the user's field of view. However, it will be understood that the see-through display device 100 may be used in virtually any environment, indoor or outdoor.

[0014] Figure 2, shown from the perspective of the user 102, illustrates an image of an object 200 in the form of a magician displayed on the see-through display device 100, such that the object 200 appears to be located in the empty viewing environment of Figure 1. Additionally, an image of a virtual shadow 202 cast by the object 200 is shown as if it were cast onto the real-world background. In the illustrated embodiment, the virtual shadow follows the contours of the floor 106 and the wall 104 in the same manner that a real shadow would. However, other embodiments may accommodate changes and variations in the contour of the background surfaces in other ways. It will be understood that the virtual shadow may have any suitable size, shape, intensity and direction, depending upon the type and location of the virtual light source or sources used to calculate the appearance of the shadow.
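As a concrete illustration of this relative-brightness approach, the following is a minimal sketch (not part of the original disclosure) of how an enhanced background image might be composed from a captured background frame and a virtual-shadow mask. The function name, the NumPy image representation and the global dimming factor are assumptions made only for the example.

```python
import numpy as np

def enhance_background(background: np.ndarray,
                       shadow_mask: np.ndarray,
                       dim_factor: float = 0.8) -> np.ndarray:
    """Compose an 'enhanced' background image in which non-shadow regions
    end up relatively brighter than the virtual-shadow regions.

    background  -- float RGB image in [0, 1] captured by the outward-facing camera
    shadow_mask -- boolean array, True where the virtual shadow falls
    dim_factor  -- optional global pre-darkening of the whole image
    """
    # Optionally darken the entire background image first (this step may be
    # omitted in low-light environments, as the description notes).
    base = background * dim_factor

    # Keep the darkened value inside the shadow and the original brightness
    # outside it, so that when this image is displayed over the real-world
    # background the shadow region receives relatively less added light
    # than its surroundings.
    enhanced = np.where(shadow_mask[..., None], base, background)
    return np.clip(enhanced, 0.0, 1.0)
```

In this sketch every non-shadow pixel is treated alike, corresponding to the global variant; the local variant discussed below would instead taper the added brightness with distance from the shadow.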
[0015] As mentioned above, an image of a virtual shadow 202 may be generated by re-displaying an image of the background scene over the actual background scene such that a relative brightness of non-shadow regions is enhanced compared to the shadow regions of the background scene image. Figure 3 shows a flow diagram illustrating an example of a method 300 of displaying shadow images in this manner.

[0016] Method 300 comprises, at 302, displaying an image of an object on the see-through display, and at 304, acquiring an image of a background scene that is within a field of view of a user of the see-through display device. The background scene image may be processed in various ways to form an enhanced image of the background scene. For example, at 306, method 300 comprises determining the location in the background of the virtual shadow cast by the displayed object. The location of the virtual shadow may be determined in any suitable manner, including but not limited to via conventional real-time shadow generation techniques used in computer graphics. It will be understood that the location and appearance of the virtual shadow may depend upon a type (e.g., parallel rays, diffuse light, etc.) and location (e.g., overhead, oblique, etc.) of the virtual light source or sources applied when determining the location of the virtual shadow, as well as upon the particular shadow generation method used.

[0017] As mentioned above, the location of the virtual shadow in the background image may depend upon the structure of the use environment. For example, the virtual shadow may change direction and/or shape depending upon the shape and orientation of the surfaces in the use environment onto which the virtual shadow is to be cast. Therefore, in some embodiments, image processing may be performed to determine variations in surface contour that may affect the appearance of the shadow, and such contour variations may then be taken into account when determining the location of the virtual shadow.

[0018] After determining the location of the virtual shadow, method 300 comprises, at 308, producing an enhanced image of the background scene by increasing the relative brightness of the background scene in a non-shadow region adjacent to the virtual shadow region compared to a brightness in the virtual shadow region. This may involve various processes. For example, in some embodiments, the entire background image may first be darkened to form a base image, and then the desired non-shadow regions may be selectively brightened relative to the virtual shadow regions. Alternatively and/or additionally, the desired virtual shadow regions may be selectively darkened relative to the non-shadow regions. It will be understood that such pre-darkening of the original background image may be omitted where appropriate, such as in low-light environments. It should further be understood that the amount by which the original background image is darkened may be fixed or variable, and may be selected or varied based upon such factors as the amount of background light removed by a neutral density filter, the intensity of the background light in a current or expected use environment, etc.
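Returning to the shadow-location determination at 306, a production system would normally use conventional real-time techniques such as shadow mapping. The deliberately simplified sketch below, which assumes a single directional virtual light and a flat floor at a known height, is included only to illustrate the geometric idea; the function and parameter names are invented for the example.

```python
import numpy as np

def project_shadow_points(object_points: np.ndarray,
                          light_dir: np.ndarray,
                          floor_y: float = 0.0) -> np.ndarray:
    """Project points on a virtual object's surface along a directional
    virtual light onto a horizontal floor plane at height floor_y.

    object_points -- (N, 3) array of points on the virtual object
    light_dir     -- (3,) direction in which the light travels; its vertical
                     component must be non-zero for the rays to reach the floor
    Returns an (N, 3) array of shadow points lying on the floor plane.
    """
    d = light_dir / np.linalg.norm(light_dir)
    # Parameter t at which each ray p + t*d intersects the plane y = floor_y.
    t = (floor_y - object_points[:, 1]) / d[1]
    return object_points + t[:, None] * d
```

Rasterizing the projected footprint into the camera's view of the background image would yield the shadow mask used in the enhancement step above; a fuller implementation would also account for walls and other surface contours, as noted in paragraph [0017].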
[0019] In some embodiments, as indicated at 310, the relative brightness of the non-shadow region of the image may be increased globally, such that a relative brightness of all non-shadow regions of the enhanced background scene image is increased with respect to all virtual shadow regions. In other embodiments, as indicated at 312, the relative brightness of non-shadow regions may be increased locally. In these embodiments, a relative brightness differential may have a maximum located at or near the shadow/non-shadow edge, and decrease from the maximum as a function of distance from the virtual shadow region. The use of a sufficiently gradual brightness gradient (as a non-limiting example, one extending over a 20-degree angle of view) can help make the bright area around the virtual shadow less noticeable, due to the lower sensitivity of the human visual system to gradual changes in brightness. It will be understood that such brightness gradients may be more noticeable where they cross a boundary, for example between different colors, textures, etc. in the background image, due to the sensitivity of the human eye to changes in brightness across narrow bands. Therefore, as indicated at 314, the increase in relative brightness of the non-shadow region adjacent to the virtual shadow region may terminate at a sharp gradient (for example, one that exceeds a preselected threshold gradient) in the background image.

[0020] Where a relative brightness of non-shadow regions is increased globally, adding the enhanced background scene image over the real-world background may cause a noticeable brightening in some circumstances. To avoid an undesirable increase in brightness, the aforementioned neutral density filter and/or other suitable optical elements may be used to reduce the total brightness of the image reaching the user's eyes by blocking some background light. In embodiments in which the relative brightness is increased locally, a neutral density filter may be omitted, as the brightened areas may not appreciably affect the overall brightness of the image as seen by the user.

[0021] The brightness of the non-shadow region may be increased by any suitable relative amount compared to the brightness of the virtual shadow region. It will be noted that shadows in the real world are generally not entirely black, but instead may show a relatively small reduction in intensity compared to adjacent non-shadow areas, due to bounced light and other ambient light sources. Therefore, in some embodiments, the relative brightness differential between shadow and non-shadow regions may be relatively subtle. Examples of suitable brightness differences include, but are not limited to, differences of 2-3 points.
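The locally tapered brightening indicated at 312 and 314 above could be sketched as follows. This example is illustrative only, and the linear falloff shape, the gain and falloff_px parameters and the use of a Euclidean distance transform are assumptions rather than anything prescribed by the description.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_brighten(background: np.ndarray,
                   shadow_mask: np.ndarray,
                   gain: float = 0.15,
                   falloff_px: float = 120.0) -> np.ndarray:
    """Brighten only the area around the virtual shadow, with a gradual
    falloff so the added light is difficult to notice.

    background  -- float RGB image in [0, 1]
    shadow_mask -- boolean array, True inside the virtual shadow
    gain        -- peak brightness added at the shadow edge
    falloff_px  -- distance in pixels over which the added light fades to zero
    """
    # Distance of every pixel from the nearest virtual-shadow pixel.
    dist = distance_transform_edt(~shadow_mask)

    # Maximum boost at the shadow edge, decaying linearly with distance,
    # and no boost inside the shadow so the shadow remains relatively dark.
    boost = gain * np.clip(1.0 - dist / falloff_px, 0.0, 1.0)
    boost[shadow_mask] = 0.0

    return np.clip(background + boost[..., None], 0.0, 1.0)
```

The edge-termination behaviour indicated at 314 could be approximated by zeroing the boost beyond pixels whose image gradient exceeds a chosen threshold, so that the brightened halo does not visibly cross strong color or texture boundaries.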
[0022] Ambient background light may have a different color distribution depending upon the ambient light sources in a viewing environment. For example, indoor ambient light may have a spectrum with peaks that correspond to the peak wavelengths of the light sources used in that indoor environment. Likewise, different outdoor locations may have different ambient light color distributions due to environmental differences. For example, ambient light in an arid environment on a sunny day may have a stronger blue component than direct sunlight, due to the wavelength distributions of bounced light in such environments.

[0023] Therefore, to help make the increase in relative brightness appear more natural, method 300 may comprise, at 316, performing a color analysis to determine a color distribution of the ambient light (for example, intensity versus wavelength), and increasing the relative brightness based upon that color intensity distribution. For example, if the color analysis shows that the ambient light has a relatively uniform color intensity distribution, then the non-shadow regions may be increased in intensity by adding white light to the non-shadow regions. Likewise, if the color analysis shows that the ambient light has a predominantly blue distribution, then similarly blue light may be added to the non-shadow regions of the enhanced background image.

[0024] Continuing with Figure 3, after forming the enhanced image of the background scene, method 300 comprises, at 318, displaying the enhanced image of the background scene along with the image of the object on the see-through display. In this manner, the virtual shadow in the enhanced background scene image may appear to be cast by the displayed object onto the real-world background. This can help to provide a sense of depth for the object, which may help convey the impression that the displayed object actually exists in the use environment.

[0025] Depending upon the processing speed of the see-through display device and the rate at which a user changes perspective in the environment, the user's perspective may change quickly enough that the enhanced background image is no longer sufficiently aligned with the currently perceived background. Therefore, in some embodiments, the see-through display device may be configured to track the user's movements and, in response, correct any misalignment between the enhanced background scene image and the currently perceived real-world background. This correction may be performed, for example, by shifting the position of the enhanced background scene image based upon the user's movements tracked during the period between acquisition of the original image and display of the enhanced background scene image. Such tracking may be performed in any suitable manner, including but not limited to via motion sensors disposed on the see-through display device.
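A minimal sketch of this alignment correction might look as follows. It assumes purely rotational head motion, a small-angle approximation and a known pixels-per-degree scale for the display; the sign conventions and names are illustrative rather than definitive and are not taken from the original disclosure.

```python
import numpy as np
from scipy.ndimage import shift

def realign_enhanced_image(enhanced: np.ndarray,
                           yaw_deg: float,
                           pitch_deg: float,
                           px_per_deg: float) -> np.ndarray:
    """Shift the enhanced background image to compensate for head rotation
    measured between capture of the background image and display of the
    enhanced image (small-angle, rotation-only approximation)."""
    dx = -yaw_deg * px_per_deg    # yaw moves the view horizontally
    dy = pitch_deg * px_per_deg   # pitch moves it vertically
    # Translate the image; regions exposed at the edges are filled with
    # zeros, i.e. no light is added there.
    return shift(enhanced, (dy, dx, 0), order=1, mode='constant', cval=0.0)
```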
[0026] The above-described method embodiments for displaying shadows on a see-through display may be used with any suitable see-through display device, including but not limited to the head-mounted display system of Figure 1. Figure 4 shows a block diagram of an embodiment of the see-through display device 100, and Figure 5 shows an example physical embodiment of the see-through display device 100. The see-through display device 100 may comprise various sensors and output devices. For example, the see-through display device 100 may comprise a see-through display subsystem 400 having an image production system 402 configured to produce images and display them on the see-through display 403, for example in the form of lenses. The image production system 402 may be configured to project images onto the see-through display 403, to display images by means of image-producing elements embedded in the see-through display 403 (for example, a transparent OLED display), or in any other suitable manner. The see-through display 403 may comprise a neutral density filter 404 and/or other optical elements to reduce the amount of background light reaching the viewer. Audio may be presented via one or more speakers 405 on the see-through display device 100.

[0027] The see-through display device 100 may further comprise one or more image sensors 406. The image sensor(s) 406 may include one or more outward-facing image sensors configured to acquire an image of a background scene for processing into an enhanced background scene image. Likewise, the image sensor(s) 406 may include eye-tracking image sensors configured to allow the viewer's eyes to be tracked for various purposes, such as determining where to locate objects in a displayed image, detecting user inputs made via eye gestures, etc. The see-through display device 100 may further comprise one or more microphones 407 to allow the use of voice commands as user inputs.

[0028] The see-through display device 100 may further comprise one or more motion sensors 408 to detect movements of the viewer's head while the viewer is wearing the see-through display device 100. This may allow, for example, an enhanced background scene image to be aligned with a current background view. Likewise, the motion sensors 408 may also be employed as user input devices, such that a user may interact with the see-through display device 100 via gestures of the neck and head, or even of the body. The motion sensors 408 may be configured to detect any suitable head movements of the user, including translational and/or tilting movements. It will be understood that the sensors illustrated in Figure 4 are shown for the purpose of example and are not intended to be limiting in any manner, as any other suitable sensors and/or combination of sensors may be used.

[0029] The head-mounted display system 100 may take any suitable physical form. For example, in some embodiments, the head-mounted display system 100 may take the form of a pair of eyeglasses as shown in Figure 5, in which the lenses may be clear, or darkened by means of a neutral density filter or other suitable optical elements. In Figure 5, an outward-facing image sensor 406 is shown located at an upper central location of the eyeglass frame, but it will be understood that the image sensor 406 may have any other suitable location. Additionally, it will be understood that, in other embodiments, the head-mounted display system 100 may take any other suitable form, such as a helmet, goggles, etc., in which a see-through display system is supported in front of a viewer's eye or eyes.

[0030] Returning to Figure 4, the see-through display device 100 further comprises a controller 410 having a logic subsystem 412 and a data retention subsystem 414 in communication with the various other components of the see-through display device 100. The data retention subsystem 414 comprises instructions stored thereon that are executable by the logic subsystem 412, for example, to display images of objects and of shadows cast by the objects on the see-through display 403, as described above.
[0031] It will be understood that the controller 410 is shown in simplified form. It should also be understood that the see-through display device 100 may use any suitable computing architecture without departing from the scope of this description.

[0032] The logic subsystem 412 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem 412 may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.

[0033] The logic subsystem 412 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem 412 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem 412 may be single-core or multi-core, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem 412 may optionally include individual components distributed across two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem 412 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud-computing configuration.

[0034] The data retention subsystem 414 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of the data retention subsystem 414 may be transformed (for example, to hold different data).

[0035] The data retention subsystem 414 may include removable media and/or built-in devices. The data retention subsystem 414 may include memory and/or devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, the logic subsystem 412 and the data retention subsystem 414 may be integrated into one or more common devices, such as an application-specific integrated circuit or a system on a chip.

[0036] The data retention subsystem may further comprise removable computer-readable storage media 416, which may be used to store and/or transfer data and/or instructions executable to implement the methods and processes described herein. The removable computer-readable storage media 416 may represent any suitable type of storage media, including but not limited to DVDs, CDs, HD-DVDs, Blu-Ray discs, EEPROMs, tape drives and/or floppy disks, among others.
[0037] The controller 410 may further comprise a communication subsystem 418 configured to communicatively couple the see-through display device 100 with one or more other computing devices. The communication subsystem 418 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem 418 may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem 418 may allow the see-through display device 100 to send and/or receive data, such as video data, game data, image data, etc., to and/or from other devices via a network such as the Internet.

[0038] It should be noted that the data retention subsystem 414 includes one or more physical, non-transitory devices. In contrast, in some embodiments, aspects of the instructions described herein may be propagated in a transitory manner by a pure signal (for example, an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present description may be propagated by a pure signal.

[0039] It should be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various illustrated acts may be performed in the illustrated sequence, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.

[0040] The subject matter of the present disclosure includes all novel and non-obvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (14)

[0001] Method (300) for operating a display device having a see-through display (403), characterized by the fact that it comprises the steps of: displaying (302) on the see-through display (403) an image of an object (200); and, while displaying the image of the object (200), displaying an image of a shadow (202) cast by the object onto the background scene (104) by: acquiring an image of the background scene; determining (306) a location of the shadow in the background scene image; rendering (308) an enhanced image of the background scene by increasing a relative brightness in a region adjacent to the shadow compared to a brightness within the shadow; and displaying (318) the enhanced image of the background scene.

[0002] Method according to claim 1, characterized by the fact that it further comprises reducing an amount of natural light passing through the see-through display (403) by means of a neutral density filter.

[0003] Method according to claim 1, characterized by the fact that rendering (308) the enhanced image of the background scene comprises not increasing the brightness within the shadow.

[0004] Method according to claim 1, characterized by the fact that rendering (308) the enhanced image of the background scene (104) comprises darkening the image of the background scene (104) globally.

[0005] Method according to claim 1, characterized by the fact that increasing the relative brightness in the region adjacent to the shadow comprises increasing the relative brightness of non-shadow regions globally.

[0006] Method according to claim 1, characterized by the fact that increasing the relative brightness in the region adjacent to the shadow comprises increasing the relative brightness locally, such that a relative brightness differential decreases as a function of distance from the shadow.

[0007] Method according to claim 6, characterized by the fact that it further comprises detecting an edge in the background scene image that exceeds a preselected threshold gradient, and terminating the increase in relative brightness at the edge.

[0008] Method according to claim 1, characterized by the fact that it further comprises determining a color distribution of ambient light, and increasing (316) the relative brightness based upon the color distribution of the ambient light.

[0009] Method according to claim 1, characterized by the fact that displaying (318) the image of the object (200) and the image of the object's shadow comprises displaying the image of the object (200) and the image of the object's shadow on a head-mounted display system.

[0010] Method according to claim 1, characterized by the fact that rendering (308) the enhanced image of the background scene comprises increasing the brightness in the region adjacent to the shadow while not increasing the brightness within the shadow.

[0011] Method according to claim 1, characterized by the fact that rendering (308) the enhanced image of the background scene comprises darkening the image of the background scene globally before brightening the region adjacent to the shadow.
[0012] Head-mounted display system (100), characterized by the fact that it comprises: a see-through display (403); an image production system (402) configured to display an image on the see-through display (403); an image sensor (406) configured to acquire an image of a background scene (104); a logic subsystem (412); and a data retention subsystem (414) comprising instructions executable by the logic subsystem to cause the head-mounted display system to perform the method as defined in any one of the preceding claims.

[0013] Head-mounted display system according to claim 12, characterized by the fact that the see-through display (403) further comprises a neutral density filter (404).

[0014] Computer storage medium, characterized by the fact that it stores instructions that perform the steps of the method as defined in any one of claims 1 to 11.
Patent family:
Publication number | Publication date
US20130147826A1 | 2013-06-13
RU2627131C2 | 2017-08-03
EP2791911A4 | 2015-11-04
BR112014014198A2 | 2017-06-13
MX2014007101A | 2014-07-28
JP2015509230A | 2015-03-26
AU2012352273B2 | 2017-10-26
JP6023212B2 | 2016-11-09
EP2791911B1 | 2017-08-30
RU2014124160A | 2015-12-20
CN103988234B | 2017-05-03
CA2857510A1 | 2013-06-20
WO2013090474A1 | 2013-06-20
KR20140101406A | 2014-08-19
US9311751B2 | 2016-04-12
EP2791911A1 | 2014-10-22
CN103988234A | 2014-08-13
CA2857510C | 2019-04-30
MX347984B | 2017-05-22
IN2014CN03962A | 2015-10-23
AU2012352273A1 | 2014-07-03
KR102004010B1 | 2019-07-25
Legal status:
2017-12-12 | B25A | Requested transfer of rights approved | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC (US)
2018-12-04 | B06F | Objections, documents and/or translations needed after an examination request according to art. 34 of the industrial property law
2019-12-24 | B06U | Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure
2020-12-08 | B09A | Decision: intention to grant
2021-02-17 | B16A | Patent or certificate of addition of invention granted | Free format text: term of validity: 20 (twenty) years counted from 12/12/2012, subject to the legal conditions.
Priority:
Application number | Filing date | Title
US13/323,403 | 2011-12-12 | Display of shadows via see-through display
PCT/US2012/069316 | 2012-12-12 | Display of shadows via see-through display